36 research outputs found

    A Novel Feature Selection Scheme and a Diversified-Input SVM-Based Classifier for Sensor Fault Classification

    The efficiency of a binary support vector machine- (SVM-) based classifier depends on the combination and the number of input features extracted from raw signals. Sometimes a combination of individually good features does not discriminate a class well because those features are also highly relevant to a second class. Moreover, increasing the dimensionality of the input vector also degrades classifier performance in most cases. For efficient results, a classifier should be fed the smallest possible combination of discriminating features. In this paper, we propose a framework to improve the performance of an SVM-based classifier for sensor fault classification in two ways: firstly, by selecting the best combination of features for a target class from a feature pool and, secondly, by minimizing the dimensionality of input vectors. To obtain the best combination of features, we propose a novel feature selection algorithm that selects m out of M features having the maximum mutual information (or relevance) with a target class and the minimum mutual information with nontarget classes, ensuring that the selected features are sensitive exclusively to the target class. Furthermore, we propose a diversified-input SVM (DI-SVM) model for multiclass classification problems to achieve the second objective, reducing the dimensionality of the input vector. In this model, the number of SVM-based classifiers is the same as the number of classes in the dataset. However, each classifier is fed a unique combination of features selected by the feature selection scheme for its target class. The efficiency of the proposed feature selection algorithm is shown by comparing the results of experiments performed with and without feature selection.
Furthermore, the experimental results in terms of accuracy, receiver operating characteristics (ROC), and the area under the ROC curve (AUC-ROC) show that the proposed DI-SVM model outperforms the conventional SVM model, the neural network, and the k-nearest neighbor algorithm for sensor fault detection and classification.
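    The selection rule described above can be sketched in plain Python. The discrete mutual-information estimator and the relevance-minus-redundancy score below are illustrative assumptions, not the paper's exact criterion:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X;Y) in bits, estimated from paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def select_features(feature_cols, target_labels, nontarget_labels_list, m):
    """Score each feature by its relevance to the target class minus its
    worst-case relevance to any nontarget class, then keep the m best.
    feature_cols: list of per-feature sample columns (discretized values)."""
    scores = []
    for j, col in enumerate(feature_cols):
        relevance = mutual_information(col, target_labels)
        redundancy = max(mutual_information(col, nt) for nt in nontarget_labels_list)
        scores.append((relevance - redundancy, j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:m]]
```

    With this scoring, a feature that tracks the target class perfectly but is independent of the nontarget classes ranks highest, matching the stated goal of exclusive sensitivity.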

    Robust Epileptic Seizure Detection Using Long Short-Term Memory and Feature Fusion of Compressed Time–Frequency EEG Images

    Epilepsy is a prevalent neurological disorder with considerable risks, including physical impairment and irreversible brain damage from seizures. Given these challenges, the urgency for prompt and accurate seizure detection cannot be overstated. Traditionally, experts have relied on manual EEG signal analyses for seizure detection, which is labor-intensive and prone to human error. Recognizing this limitation, the rise of deep learning methods has been heralded as a promising avenue, offering more refined diagnostic precision. On the other hand, the prevailing challenge in many models is their constrained emphasis on specific domains, potentially diminishing their robustness and precision in complex real-world environments. This paper presents a novel model that seamlessly integrates the salient features from the time–frequency domain along with pivotal statistical attributes derived from EEG signals. This fusion process combines essential statistics, including the mean, median, and variance, with the rich data from continuous wavelet transform (CWT) time–frequency images compressed using autoencoders. This multidimensional feature set provides a robust foundation for subsequent analytic steps. A long short-term memory (LSTM) network, meticulously optimized for the renowned Bonn Epilepsy dataset, was used to enhance the capability of the proposed model. Preliminary evaluations underscore the prowess of the proposed model: a remarkable 100% accuracy in most of the binary classifications, exceeding 95% accuracy in the three-class and four-class challenges, and a commendable rate exceeding 93.5% for the five-class classification.
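    The fusion step can be sketched as a simple concatenation. The function name and the form of the compressed-image feature vector are assumptions for illustration; the paper's autoencoder produces the latter:

```python
import statistics

def fuse_features(eeg_segment, compressed_image_features):
    """Concatenate simple statistical attributes of a raw EEG segment with a
    (hypothetical) latent vector from an autoencoder-compressed CWT image."""
    stats = [
        statistics.mean(eeg_segment),
        statistics.median(eeg_segment),
        statistics.variance(eeg_segment),  # sample variance
    ]
    return stats + list(compressed_image_features)
```

    The resulting vector feeds the LSTM, which sees both global statistics of the segment and the learned time–frequency representation.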

    A Systematic and Comprehensive Survey of Recent Advances in Intrusion Detection Systems Using Machine Learning: Deep Learning, Datasets, and Attack Taxonomy

    Recently, intrusion detection systems (IDS) have become an essential part of most organisations’ security architecture due to the rise in frequency and severity of network attacks. To identify a security breach, the target machine or network must be watched and analysed for signs of an intrusion, defined as an effort to compromise the confidentiality, integrity, or availability of a computer or network or to circumvent its security mechanisms. Several IDS have been proposed in the literature to efficiently detect such attempts by exploiting different characteristics of cyberattacks. These systems provide timely detection of network intrusions and, subsequently, notify the manager or the responsible person in an organisation. Appropriate actions are then carried out to reduce the degree of damage caused by the intrusion. Organisations employ intrusion detection to defend their systems against network disruption and to increase confidence in their information systems. This paper presents a detailed summary of recent advances in IDS from the literature. In addition, future research directions for detecting malicious operations and different attacks on systems are discussed and highlighted. Furthermore, this study presents a detailed description of well-known publicly available datasets and a variety of strategies developed for dealing with intrusions.

    AI-Enabled Traffic Control Prioritization in Software-Defined IoT Networks for Smart Agriculture

    Smart agricultural systems have received a great deal of interest in recent years because of their potential for improving the efficiency and productivity of farming practices. These systems gather and analyze environmental data such as temperature, soil moisture, humidity, etc., using sensor networks and Internet of Things (IoT) devices. This information can then be utilized to improve crop growth, identify plant illnesses, and minimize water usage. However, dealing with data complexity and dynamism can be difficult when using traditional processing methods. As a solution, we offer a novel framework that combines Machine Learning (ML) with a Reinforcement Learning (RL) algorithm to optimize traffic routing inside Software-Defined Networks (SDN) through traffic classification. ML models such as Logistic Regression (LR), Random Forest (RF), k-Nearest Neighbours (KNN), Support Vector Machines (SVM), Naive Bayes (NB), and Decision Trees (DT) are used to categorize data traffic into emergency, normal, and on-demand classes. The basic version of RL, i.e., the Q-learning (QL) algorithm, is utilized alongside the SDN paradigm to optimize routing based on traffic classes. It is worth mentioning that RF and DT outperform the other ML models in terms of accuracy. Our results illustrate the importance of the suggested technique in optimizing traffic routing in SDN environments. Integrating ML-based data classification with the QL method improves resource allocation, reduces latency, and improves the delivery of emergency traffic. The versatility of SDN facilitates the adaptation of routing algorithms depending on real-time changes in network circumstances and traffic characteristics.
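    The class-aware Q-learning step can be sketched as follows. The class weights, learning rate, and reward shape are illustrative assumptions, not the paper's parameters:

```python
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount (assumed values)
CLASS_WEIGHT = {"emergency": 3.0, "on-demand": 2.0, "normal": 1.0}

def reward(traffic_class, link_delay_ms):
    # Penalize delay more heavily for higher-priority classes, steering the
    # learned routing policy toward low-latency paths for emergency traffic.
    return -CLASS_WEIGHT[traffic_class] * link_delay_ms

def q_update(Q, state, action, r, next_state, actions):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (r + GAMMA * best_next - old)
```

    Repeated updates from observed link delays drive the Q-table, and hence the SDN controller's next-hop choices, toward routes that minimize weighted delay.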

    IoT-Enabled Vehicle Speed Monitoring System

    Millions of people lose their lives each year worldwide due to traffic law violations, specifically overspeeding. Existing systems fail to report most such violations due to their respective flaws. For instance, speed guns work in isolation and cannot measure the speed of all vehicles on a road at all spatial points; they can only detect the speed of a vehicle within the camera's line of sight. One solution is to deploy a huge number of speed guns at different locations on the road to detect and report overspeeding vehicles. However, this solution is not feasible because it demands a large amount of equipment and computational resources to process such a large amount of data. In this paper, a speed detection framework is developed that detects vehicles’ speeds with only two speed guns and can report speed even when a vehicle is not within the camera’s line of sight. The system is specifically designed for irregular traffic scenarios such as that of Pakistan, where it is inconvenient to install conventional systems. The idea is to calculate the average speed of vehicles traveling through a specific region, for instance, between two spatial points. A low-cost Raspberry Pi (RPi) module and an ordinary camera are deployed to detect the registration numbers on vehicle license plates. This hardware presents a more stable system since it is powered by a low-consumption Raspberry Pi that can operate for hours without crashing or malfunctioning. More specifically, the entrance and exit locations and the time taken to travel from one point to the other are recorded. An automatic alert to the traffic authorities is generated when a driver is overspeeding. A detailed explanation of the hardware prototype and the algorithms is given, along with the setup configurations of the hardware prototype, the website, and the mobile device applications.
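    The average-speed check between the two checkpoints reduces to a simple calculation. The function names and the alert message format are illustrative assumptions:

```python
def average_speed_kmh(distance_km, t_entry_s, t_exit_s):
    """Average speed between two checkpoints from entry/exit timestamps (seconds)."""
    hours = (t_exit_s - t_entry_s) / 3600.0
    return distance_km / hours

def check_vehicle(plate, distance_km, t_entry_s, t_exit_s, limit_kmh):
    """Return an alert string for the traffic authorities if the plate's
    average speed over the section exceeds the limit, else None."""
    speed = average_speed_kmh(distance_km, t_entry_s, t_exit_s)
    if speed > limit_kmh:
        return f"ALERT: {plate} averaged {speed:.1f} km/h over a {limit_kmh} km/h limit"
    return None
```

    Because the speed is averaged over the whole section, a driver cannot evade detection by slowing down only near the cameras.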

    Throughput Maximization Using an SVM for Multi-Class Hypothesis-Based Spectrum Sensing in Cognitive Radio

    A framework of spectrum sensing with a multi-class hypothesis is proposed to maximize the achievable throughput in cognitive radio networks. The energy range of a sensing signal under the hypothesis that the primary user is absent (in a conventional two-class hypothesis) is further divided into quantized regions, whereas the hypothesis that the primary user is present is conserved. The non-radio-frequency energy harvesting-equipped secondary user transmits, when the primary user is absent, with a transmission power based on the hypothesis result (the energy level of the sensed signal) and the residual energy in the battery: the lower the energy of the received signal, the higher the transmission power, and vice versa. Likewise, the lower the residual energy in the node, the lower the transmission power. This technique increases the throughput of a secondary link by providing a higher number of transmission events compared to the conventional two-class hypothesis. Furthermore, transmission with low power for higher energy levels in the sensed signal reduces the probability of interference with primary users if, for instance, a detection was missed. The familiar machine learning algorithm known as a support vector machine (SVM) is used in a one-versus-rest approach to classify the input signal into predefined classes. The input signal to the SVM is composed of three statistical features extracted from the sensed signal and a number ranging from 0 to 100 representing the percentage of residual energy in the node’s battery. To increase the generalization of the classifier, k-fold cross-validation is utilized in the training phase. The experimental results show that an SVM with the given features performs satisfactorily for all kernels, but an SVM with a polynomial kernel outperforms linear and radial-basis function kernels in terms of accuracy.
Furthermore, the proposed multi-class hypothesis achieves higher throughput compared to the conventional two-class hypothesis for spectrum sensing in cognitive radio networks.
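    The power-control rule implied by the two trends above can be sketched as follows. The linear scaling with the quantized energy class and with residual battery percentage is an illustrative assumption, not the paper's exact mapping:

```python
def transmit_power(energy_class, num_classes, residual_pct, p_max):
    """Map the quantized 'PU absent' energy class (0 = lowest sensed energy)
    and the battery's residual percentage to a transmit power.
    A class index of num_classes denotes the conserved 'PU present' hypothesis."""
    if energy_class >= num_classes:
        return 0.0  # primary user present: the secondary user stays silent
    sensing_factor = 1.0 - energy_class / num_classes  # lower energy -> higher power
    battery_factor = residual_pct / 100.0              # lower battery -> lower power
    return p_max * sensing_factor * battery_factor
```

    Low sensed energy at full battery yields maximum power, while a higher energy class (a weakly detected primary signal) or a depleted battery throttles the transmission, reducing both interference risk and energy outage.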

    Wasserstein GAN-based Digital Twin Inspired Model for Early Drift Fault Detection in Wireless Sensor Networks

    In this Internet of Things (IoT) era, the number of devices capable of sensing their surroundings is increasing day by day. Based on the data from these devices, numerous services and systems are now offered in which critical decisions depend on the data collected by sensors. Therefore, error-free data are most desirable, but due to extreme operating environments, the possibility of faults occurring in sensors is high. Detecting faults in the data obtained by sensors is therefore important. In this paper, a digital twin inspired detection approach is proposed, and its ability to detect a single type of fault in several sensors is analyzed. The digital equivalent of the sensor is developed using a Generative Adversarial Network (GAN). As GANs inherently perform well with images, Gramian Angular Field (GAF) encoding is used to convert time-series data to images. The GAF encoding preserves the temporal relations of the time-series data. The GAN is trained with the GAF images. The trained GAN model acts as the virtual representation of the sensor, and the discriminator network of the GAN model, once it has learned the pattern of normal data, is used as the fault detector. The performance of the virtual sensor is promising because it successfully generates data for normal conditions. The best fault detection accuracy achieved by the proposed model is 98.7%, which makes this GAN-based digital twin inspired approach a promising candidate for sensor fault detection.
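    The GAF encoding itself is compact enough to sketch directly. This is the standard Gramian Angular Summation Field; the paper may use a variant:

```python
import math

def gramian_angular_field(series):
    """Gramian Angular (Summation) Field: rescale a non-constant series to
    [-1, 1], map each value to an angle phi = arccos(x), and form the matrix
    G[i][j] = cos(phi_i + phi_j), which preserves temporal relations."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]  # clamp for safety
    return [[math.cos(pi + pj) for pj in phi] for pi in phi]
```

    Each pixel depends on a pair of time steps, so the image's rows and columns retain the ordering of the original samples, which is what makes the encoding suitable for image-oriented GANs.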

    Modeling and Analysis of DIPPM: A New Modulation Scheme for Visible Light Communications

    Visible Light Communication (VLC) uses an Intensity-Modulation and Direct-Detection (IM/DD) scheme to transmit data. However, the light source used in a VLC system is rapidly and continuously switched on and off, resulting in flickering. In addition, recent illumination systems include dimming support to allow users to dim the light sources to the desired level. Therefore, the modulation scheme for data transmission in VLC systems must include flicker mitigation and dimming control capabilities. In this paper, the authors propose a Double Inverse Pulse Position Modulation (DIPPM) scheme that minimizes flickering and supports a high level of dimming for the illumination sources in VLC systems. To form DIPPM, some changes are made to the symbol structure of the IPPM scheme, and a detailed explanation and mathematical model of DIPPM are given in this paper. Furthermore, both analytical and simulation results for the error performance of 2-DIPPM are compared with the performance of VPPM. Also, the communication performance of DIPPM is analyzed in terms of the normalized required power.
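    The abstract does not specify how DIPPM alters the IPPM symbol, so as context the sketch below encodes plain M-ary inverse PPM, the scheme DIPPM is derived from: the light is ON in every slot except the one whose index carries the data, which fixes the per-symbol duty cycle and hence suppresses flicker:

```python
def ippm_symbol(value, M):
    """M-ary Inverse PPM symbol: OFF only in the slot indexing the data,
    giving a constant duty cycle of (M-1)/M regardless of the value sent."""
    if not 0 <= value < M:
        raise ValueError("symbol value out of range")
    return [0 if slot == value else 1 for slot in range(M)]
```

    Because every symbol lights the same number of slots, the average brightness is data-independent; DIPPM modifies this structure to extend the achievable dimming range.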

    Toward a Lightweight Intrusion Detection System for the Internet of Things

    Integration of the Internet into the entities of the different domains of human society (such as smart homes, health care, smart grids, manufacturing processes, product supply chains, and environmental monitoring) is emerging as a new paradigm called the Internet of Things (IoT). However, the ubiquitous and wide-ranging nature of IoT networks makes them prone to cyberattacks. One of the main types of attack is denial of service (DoS), where the attacker floods the network with a large volume of data to prevent nodes from using its services. An intrusion detection mechanism is considered a chief source of protection for information and communications technology. However, conventional intrusion detection methods need to be modified and improved for application to the IoT owing to certain limitations, such as resource-constrained devices, the limited memory and battery capacity of nodes, and specific protocol stacks. In this paper, we develop a lightweight attack detection strategy utilizing a supervised machine learning-based support vector machine (SVM) to detect an adversary attempting to inject unnecessary data into the IoT network. The simulation results show that the proposed SVM-based classifier, aided by a combination of two or three simple features, can perform satisfactorily in terms of classification accuracy and detection time.
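    A flood-detection decision with two simple features can be sketched as follows. The feature choice (packet rate, mean payload size) and the linear decision weights are hypothetical placeholders for a trained SVM's boundary:

```python
def flow_features(packet_times_s, payload_sizes):
    """Two lightweight features over an observation window:
    packet rate (packets/s) and mean payload size (bytes)."""
    window = (packet_times_s[-1] - packet_times_s[0]) or 1e-9
    rate = len(packet_times_s) / window
    mean_payload = sum(payload_sizes) / len(payload_sizes)
    return rate, mean_payload

def is_flood(rate, mean_payload, w=(0.1, -0.01), b=-5.0):
    """Linear (SVM-style) decision: flag flows with a high rate of small
    packets. The weights here are illustrative, not learned values."""
    return w[0] * rate + w[1] * mean_payload + b > 0
```

    Keeping the feature set this small is what makes the approach viable on resource-constrained nodes: both features are running counters, and the decision is a single dot product.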

    CAFD: Context-Aware Fault Diagnostic Scheme towards Sensor Faults Utilizing Machine Learning

    Sensors’ existence as a key component of Cyber-Physical Systems makes them susceptible to failures due to complex environments, low-quality production, and aging. When defective, sensors either stop communicating or convey incorrect information. These unsteady situations threaten the safety, economy, and reliability of a system. The objective of this study is to construct a lightweight machine learning-based fault detection and diagnostic system within the limited energy, memory, and computational resources of a Wireless Sensor Network (WSN). In this paper, a Context-Aware Fault Diagnostic (CAFD) scheme is proposed based on an ensemble learning algorithm called Extra-Trees. To evaluate the performance of the proposed scheme, a realistic WSN scenario composed of humidity and temperature sensor observations is replicated with extremely low-intensity faults. Six commonly occurring types of sensor fault are considered: drift, hard-over/bias, spike, erratic/precision degradation, stuck, and data-loss. The proposed CAFD scheme reveals the ability to accurately detect and diagnose low-intensity sensor faults in a timely manner. Moreover, the efficiency of the Extra-Trees algorithm in terms of diagnostic accuracy, F1-score, ROC-AUC, and training time is demonstrated by comparison with cutting-edge machine learning algorithms: a Support Vector Machine and a Neural Network.
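    Four of the six fault types can be synthesized from clean readings as below. The magnitudes and the exact injection forms are illustrative assumptions, not the study's parameters:

```python
import random

def inject_fault(readings, fault, magnitude=5.0, seed=0):
    """Synthesize faulty sensor data from clean readings for four of the
    six fault types in the study (illustrative magnitudes)."""
    rng = random.Random(seed)
    n = len(readings)
    if fault == "bias":                # constant offset (hard-over/bias)
        return [v + magnitude for v in readings]
    if fault == "drift":               # error grows linearly with time
        return [v + magnitude * i / n for i, v in enumerate(readings)]
    if fault == "spike":               # occasional large excursions
        return [v + (magnitude * 10 if rng.random() < 0.05 else 0.0)
                for v in readings]
    if fault == "stuck":               # output frozen at one value
        return [readings[0]] * n
    raise ValueError(f"unknown fault type: {fault}")
```

    Labeled data generated this way is what a supervised diagnostic classifier such as Extra-Trees trains on; "low-intensity" faults correspond to small magnitudes, which are the hardest to separate from normal noise.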